Lack of trust and racism concerns: Five key failings in Sara Sharif review

BBC News

An independent review of the Sara Sharif case has identified multiple failings by agencies before her murder in Surrey in 2023, which followed two years of abuse. The child safeguarding practice review, published on Thursday, said there were clearly several points in Sara's life, particularly during the last few months, where different actions could and should have been taken by the authorities. The system failed to keep her safe, it added. Responding to the report, the Children's Commissioner said the case was a catalogue of missed opportunities, poor communication and ill-informed assumptions. The education secretary said there had been glaring failures across all agencies.


Measuring Stereotype and Deviation Biases in Large Language Models

Wang, Daniel, Brignac, Eli, Mao, Minjia, Fang, Xiao

arXiv.org Artificial Intelligence

Large language models (LLMs) are widely applied across diverse domains, raising concerns about their limitations and potential risks. In this study, we investigate two types of bias that LLMs may display: stereotype bias and deviation bias. Stereotype bias arises when LLMs consistently associate specific traits with a particular demographic group. Deviation bias is the disparity between the demographic distributions extracted from LLM-generated content and real-world demographic distributions. By asking four advanced LLMs to generate profiles of individuals, we examine the associations between each demographic group and attributes such as political affiliation, religion, and sexual orientation. Our experimental results show that all examined LLMs exhibit both significant stereotype bias and deviation bias towards multiple groups. Our findings uncover the biases that occur when LLMs infer user attributes and shed light on the potential harms of LLM-generated outputs.
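
The deviation-bias measure described above invites a concrete reading: tally a demographic attribute across generated profiles and compare the resulting distribution against a real-world reference. A minimal Python sketch, with total variation distance standing in for whatever divergence the authors actually use; the labels and reference distribution are illustrative, not the paper's data:

```python
from collections import Counter

def deviation_bias(generated_labels, reference_dist):
    """Total variation distance between the demographic distribution
    observed in LLM-generated profiles and a real-world reference.
    0.0 means the distributions match; 1.0 is maximal deviation."""
    counts = Counter(generated_labels)
    total = sum(counts.values())
    categories = set(counts) | set(reference_dist)
    return 0.5 * sum(
        abs(counts.get(c, 0) / total - reference_dist.get(c, 0.0))
        for c in categories
    )

# Illustrative only: attribute labels extracted from generated profiles
# and a made-up reference (real work would use e.g. census figures).
labels = ["group_a"] * 70 + ["group_b"] * 30
reference = {"group_a": 0.5, "group_b": 0.5}
print(deviation_bias(labels, reference))  # 0.2
```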


PreCare: Designing AI Assistants for Advance Care Planning (ACP) to Enhance Personal Value Exploration, Patient Knowledge, and Decisional Confidence

Hsu, Yu Lun, Chou, Yun-Rung, Chang, Chiao-Ju, Chang, Yu-Cheng, Lee, Zer-Wei, Gipiškis, Rokas, Li, Rachel, Shih, Chih-Yuan, Peng, Jen-Kuei, Huang, Hsien-Liang, Tsai, Jaw-Shiun, Chen, Mike Y.

arXiv.org Artificial Intelligence

Advance Care Planning (ACP) enables individuals to document their preferred end-of-life life-sustaining treatments prior to potential incapacitation from injury or illnesses such as coma, cancer, or dementia. While online ACP platforms offer high accessibility, they often lack essential benefits provided by clinical consultations, including deep introspection of personal values, real-time Q&A on medical treatments, and personalized reviews of decision consequences. To bridge this gap, we conducted two formative studies: 1) shadowing and interviewing 3 ACP teams consisting of physicians, nurses, and social workers (18 patients total), and 2) interviewing 14 users of ACP websites. Leveraging these insights, we developed PreCare in collaboration with 6 ACP professionals. PreCare is a website featuring 3 AI-driven assistants designed to guide users through exploring personal values, gaining ACP knowledge, and supporting informed decision-making. A usability study (n=12) showed that PreCare achieved a System Usability Scale (SUS) rating of excellent. A comparative evaluation (n=12) showed that PreCare's AI assistants significantly improved exploration of personal values, knowledge, and decisional confidence, and were preferred by 92% of participants.
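
For readers unfamiliar with the metric, the SUS rating cited above comes from a fixed formula over ten 1-5 Likert responses; commonly cited adjective scales (Bangor et al.) place "excellent" at roughly 85 and above. A small sketch of the standard computation; the sample responses are made up, not PreCare's data:

```python
def sus_score(responses):
    """Standard System Usability Scale score from ten 1-5 Likert
    responses: odd-numbered items contribute (r - 1), even-numbered
    items (5 - r); the sum is scaled by 2.5 to a 0-100 score."""
    assert len(responses) == 10
    contributions = [
        (r - 1) if i % 2 == 0 else (5 - r)  # index 0 is item 1 (odd)
        for i, r in enumerate(responses)
    ]
    return 2.5 * sum(contributions)

# Illustrative responses only (not data from the PreCare study).
print(sus_score([5, 2, 4, 1, 5, 2, 5, 1, 4, 2]))  # 87.5
```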


Exploring the Role of AI-Powered Chatbots for Teens and Young Adults with ASD or Social Anxiety

Mian, Dilan

arXiv.org Artificial Intelligence

The world can be a complex and difficult place to navigate. People with high-functioning Autism Spectrum Disorder (ASD), as well as those with more general social difficulties, often face navigation challenges that individuals outside these groups simply do not. These challenges can become even more pronounced during the teenage years and early adulthood, the usual age range of college students. At such a vulnerable age, they can be far more susceptible to struggles with becoming comfortable and confident in social interactions and with building strong relationships outside their immediate family. Meanwhile, the rapid emergence of artificial intelligence chatbots means many are now being used to benefit people of different ages and demographics, with easy accessibility. If there is anything that people with high-functioning ASD and social difficulties want from guidance towards self-improvement, easy accessibility is surely high on the list. What are the potential benefits and limitations of using a MindStudio AI-powered chatbot to provide mental health support for teens and young adults with the aforementioned conditions? What could be done with a tool like this to help those individuals navigate ethical dilemmas within different social environments and reduce existing social tensions? This paper addresses these questions and offers insights to inform future discussions on the subject.


Navigating AI in Social Work and Beyond: A Multidisciplinary Review

Dalziel, Matt Victor, Schaffer, Krystal, Martin, Neil

arXiv.org Artificial Intelligence

This review began with the modest goal of drafting a brief commentary on how the social work profession engages with and is impacted by artificial intelligence (AI). However, it quickly became apparent that a deeper exploration was required to adequately capture the profound influence of AI, one of the most transformative and debated innovations in modern history. As a result, this review evolved into an interdisciplinary endeavour, gathering seminal texts, critical articles, and influential voices from across industries and academia. This review aims to provide a comprehensive yet accessible overview, situating AI within broader societal and academic conversations as 2025 dawns. We explore perspectives from leading tech entrepreneurs, cultural icons, CEOs, and politicians alongside the pioneering contributions of AI engineers, innovators, and academics from fields as diverse as mathematics, sociology, philosophy, economics, and more. This review also briefly analyses AI's real-world impacts, ethical challenges, and implications for social work. It presents a vision for AI-facilitated simulations that could transform social work education through Advanced Personalised Simulation Training (APST). This tool uses AI to tailor high-fidelity simulations to individual student needs, providing real-time feedback and preparing them for the complexities of their future practice environments. We maintain a critical tone throughout, balancing our awe of AI's remarkable advancements with necessary caution. As AI continues to permeate every professional realm, understanding its subtleties, challenges, and opportunities becomes essential. Those who fully grasp the intricacies of this technology will be best positioned to navigate the impending AI Era.
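
The paper does not specify an implementation for APST, but its core loop, an AI client persona adapted to the student plus real-time feedback on the encounter, can be gestured at in code. Everything below is a hypothetical sketch: the `llm` helper, both prompts, and the feedback rubric are placeholders of my own, not the authors' design:

```python
# Minimal sketch of an APST-style session: an AI client persona that a
# student interviews, followed by automated feedback on the transcript.

def llm(system_prompt: str, messages: list[dict]) -> str:
    """Stand-in for any chat-completion API (e.g. an OpenAI-style call)."""
    raise NotImplementedError("wire up your preferred LLM client here")

CLIENT_PERSONA = (
    "Role-play a social work client facing {scenario}. Adapt your "
    "responses to difficulty level {level}. Stay in character."
)
FEEDBACK_RUBRIC = (
    "You are a social work educator. Review the transcript and give "
    "feedback on empathy, questioning technique, and risk awareness."
)

def run_session(scenario: str, level: int, turns: int = 5) -> str:
    history: list[dict] = []
    persona = CLIENT_PERSONA.format(scenario=scenario, level=level)
    for _ in range(turns):
        student_turn = input("Student: ")
        history.append({"role": "user", "content": student_turn})
        client_reply = llm(persona, history)
        history.append({"role": "assistant", "content": client_reply})
        print("Client:", client_reply)
    transcript = "\n".join(f"{m['role']}: {m['content']}" for m in history)
    return llm(FEEDBACK_RUBRIC, [{"role": "user", "content": transcript}])
```

Tailoring the `scenario` and `level` parameters per student is the crux of the personalisation the review envisions; how that adaptation is driven (assessment scores, instructor input, prior transcripts) is left open by the paper.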


Reimagining AI in Social Work: Practitioner Perspectives on Incorporating Technology in their Practice

Wassal, Katie, Ashurst, Carolyn, Hron, Jiri, Zilka, Miri

arXiv.org Artificial Intelligence

There has been a surge in the number and type of AI tools being tested and deployed within both national and local government in the UK, including within the social care sector. Given the many ongoing and planned developments, the time is ripe to review and reflect on the state of AI in social care. We do so by conducting semi-structured interviews with UK-based social work professionals about their experiences and opinions of past and current AI systems. Our aim is to understand what systems practitioners would like to see developed, and how. We find that all our interviewees had overwhelmingly negative past experiences of technology in social care and a unanimous aversion to algorithmic decision systems in particular, but also a strong interest in AI applications that could allow them to spend less time on administrative tasks. In response to our findings, we offer a series of concrete recommendations, including a commitment to participatory design and the necessity of regaining practitioner trust.


Protected group bias and stereotypes in Large Language Models

Kotek, Hadas, Sun, David Q., Xiu, Zidi, Bowler, Margit, Klein, Christopher

arXiv.org Artificial Intelligence

As modern Large Language Models (LLMs) shatter many state-of-the-art benchmarks in a variety of domains, this paper investigates their behavior in the domains of ethics and fairness, focusing on protected group bias. We conduct a two-part study: first, we solicit sentence continuations describing the occupations of individuals from different protected groups, including gender, sexuality, religion, and race. Second, we have the model generate stories about individuals who hold different types of occupations. We collect >10k sentence completions made by a publicly available LLM, which we subject to human annotation. We find bias against minoritized groups in model generations, particularly in the domains of gender and sexuality, as well as a Western bias. The model not only reflects societal biases, but appears to amplify them. It is additionally overly cautious in replies to queries relating to minoritized groups, providing responses that emphasize diversity and equity so strongly that other group characteristics are overshadowed. This suggests that artificially constraining potentially harmful outputs may itself lead to harm, and should be applied in a careful and controlled manner.
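
The first part of the study, soliciting occupation continuations across protected groups, follows a standard template-probing pattern that is easy to sketch. A minimal Python version, where the sentence frame, group list, and `complete` helper are illustrative placeholders rather than the authors' materials:

```python
from itertools import product

# Vary a protected-group term in a fixed sentence frame and collect
# occupation continuations for later human annotation.
TEMPLATE = "The {group} person worked as a"
GROUPS = ["young", "elderly", "Muslim", "Jewish", "Black", "white"]
N_SAMPLES = 3  # the study gathers >10k completions in total

def complete(prompt: str) -> str:
    raise NotImplementedError("wrap the probed LLM's completion API here")

completions = [
    {"group": g, "sample": i, "text": complete(TEMPLATE.format(group=g))}
    for g, i in product(GROUPS, range(N_SAMPLES))
]
# Each record would then be coded by human annotators for the occupation
# named, allowing per-group occupation distributions to be compared.
```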


7 best audiobooks you didn't know you needed

FOX News

Should we be concerned about our voices being sourced for AI audiobooks? Kurt "CyberGuy" Knutsson delves into the new technology. Do you love reading yet struggle to find the time for it? Don't worry, you can still enjoy a good book without having to sit down and read. Audiobooks are a convenient way to experience a good story while you're doing other things.


Investigation underway after AI tool may have misinterpreted a child's disability as parental neglect

FOX News

For the two weeks that the Hackneys' baby girl lay in a Pittsburgh hospital bed weak from dehydration, her parents rarely left her side, sometimes sleeping on the fold-out sofa in the room. They stayed with their daughter around the clock when she was moved to a rehab center to regain her strength. Finally, the 8-month-old stopped batting away her bottles and started putting on weight again. "She was doing well and we started to ask when can she go home," Lauren Hackney said.


Rehabilitating Homeless: Dataset and Key Insights

Bykova, Anna, Filippov, Nikolay, Yamshchikov, Ivan P.

arXiv.org Artificial Intelligence

This paper presents a large anonymized dataset on homelessness, alongside insights into the data-driven rehabilitation of homeless people. The dataset was gathered by a large nonprofit organization that has worked on rehabilitating homeless people for twenty years. To our knowledge, this is the first dataset containing rich information on thousands of homeless individuals seeking rehabilitation. We show how data analysis can help make the rehabilitation of homeless people more effective and successful. We hope this paper alerts the data science community to the problem of homelessness.